Large Scale Distributed Neural Network Training through Online Distillation
Abstract
Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use, as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model, so they can be safely computed using weights that are only rarely transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset and the largest to-date dataset used for neural language modeling, containing 6 × 10^11 tokens and based on the Common Crawl repository of web data.
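The abstract only sketches how the two models share knowledge, so a small illustration may help. The sketch below is not the paper's implementation: it assumes a classification setting, a cross-entropy distillation term, and a hypothetical distill_weight hyperparameter, and it only shows how a single worker could combine its ordinary loss with an agreement penalty computed from a stale copy of its peer's predictions.

```python
import numpy as np

# Illustrative sketch of a codistillation-style objective (assumed names;
# not taken from the paper's code). Each worker trains on its own shard of
# the data and adds a penalty that pulls its predictions toward those of a
# stale copy of the other model.

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # labels are integer class ids for each example in the batch.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def codistillation_loss(own_logits, stale_peer_logits, labels, distill_weight=0.5):
    # Ordinary loss on this worker's shard of the data.
    own_probs = softmax(own_logits)
    hard_loss = cross_entropy(own_probs, labels)
    # Agreement term: cross-entropy against the peer's soft predictions,
    # computed from weights that are only occasionally refreshed.
    peer_probs = softmax(stale_peer_logits)
    soft_loss = -(peer_probs * np.log(own_probs + 1e-12)).sum(axis=-1).mean()
    return hard_loss + distill_weight * soft_loss

# Toy usage: random logits standing in for the two models' outputs.
rng = np.random.default_rng(0)
own = rng.normal(size=(8, 10))
peer = rng.normal(size=(8, 10))
labels = rng.integers(0, 10, size=8)
print(codistillation_loss(own, peer, labels))
```

Because the peer's predictions are produced from rarely transmitted, stale weights, the agreement term can be evaluated locally without tight synchronization between workers, which is what allows this scheme to add useful parallelism beyond the point where synchronous or asynchronous SGD alone stops scaling.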
Similar resources
Simulation and Control of an Aromatic Distillation Column
In general, the objective of distillation control is to maintain the desired product quality. In this paper, the performance of different one-point control strategies for an aromatic distillation column has been compared through dynamic simulation. These methods are: a) Composition control using the measured composition directly. This method suffers from the large sampling delay of measuring de...
Online Composition Prediction of a Debutanizer Column Using Artificial Neural Network
The current method for composition measurement of an industrial distillation column is an offline method, which is slow, tedious, and can lead to inaccurate results. Among the advantages of the online composition prediction designed here are overcoming the long time delay introduced by laboratory sampling and providing a better estimate, which is suitable for online monitoring purposes. This paper pres...
Radial Basis Neural Network Based Islanding Detection in Distributed Generation
This article presents a Radial Basis Neural Network (RBNN) based islanding detection technique. Islanding detection and prevention is a mandatory requirement for grid-connected distributed generation (DG) systems. Several methods based on passive and active detection schemes have been proposed. While passive schemes have a large non-detection zone (NDZ), concern has been raised about active method ...